Transfer Function Optimization for Comparative Volume Rendering
Direct volume rendering is often used to compare different 3D scalar fields.
The choice of the transfer function which maps scalar values to color and
opacity plays a critical role in this task. We present a technique for the
automatic optimization of a transfer function so that rendered images of a
second field match, as closely as possible, images of a field that has been rendered
with some other transfer function. This enables users to see whether
differences in the visualizations can be solely attributed to the choice of
transfer function or remain after optimization. We propose and compare two
different approaches to solve this problem, a voxel-based solution solving a
least squares problem, and an image-based solution using differentiable volume
rendering for optimization. We further propose a residual-based visualization
to emphasize the differences in information content.
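The voxel-based variant of such an optimization can be sketched as a binned least-squares fit: for a transfer function that is piecewise constant over scalar bins, the least-squares optimum per bin is simply the mean of the reference colors falling into that bin. The function and parameter names below are illustrative, not the paper's implementation.

```python
import numpy as np

# Hedged sketch: voxel-based least-squares fit of a transfer function.
# We bin the scalars of field_b and solve, per bin, for the RGBA value
# that best matches the colors a reference transfer function assigns
# to field_a at the same voxels. All names are illustrative.

def fit_transfer_function(field_a, field_b, tf_a, n_bins=64):
    """field_a, field_b: 1D arrays of voxel scalars in [0, 1).
    tf_a: callable mapping scalars to RGBA rows.
    Returns an (n_bins, 4) lookup table for field_b."""
    target = tf_a(field_a)                              # reference colors, (N, 4)
    bins = np.clip((field_b * n_bins).astype(int), 0, n_bins - 1)
    tf_b = np.zeros((n_bins, 4))
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # the per-bin least-squares optimum is the mean target color
            tf_b[b] = target[mask].mean(axis=0)
    return tf_b

# toy usage: a grayscale ramp as the reference transfer function
tf_a = lambda s: np.stack([s, s, s, np.ones_like(s)], axis=-1)
rng = np.random.default_rng(0)
a = rng.random(10000)
b = a                       # identical fields: fitted TF should recover the ramp
table = fit_transfer_function(a, b, tf_a, n_bins=16)
```

With identical fields, each fitted bin converges to the bin-center gray value, confirming that residual differences would stem from the fields themselves rather than the transfer function.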
Fast Neural Representations for Direct Volume Rendering
Despite the potential of neural scene representations to effectively compress
3D scalar fields at high reconstruction quality, the computational complexity
of the training and data reconstruction step using scene representation
networks limits their use in practical applications. In this paper, we analyze
whether scene representation networks can be modified to reduce these
limitations and whether such architectures can also be used for temporal
reconstruction tasks. We propose a novel design of scene representation
networks using GPU tensor cores to integrate the reconstruction seamlessly into
on-chip raytracing kernels, and compare the quality and performance of this
network to alternative network- and non-network-based compression schemes. The
results indicate competitive quality of our design at high compression rates,
and significantly faster decoding times and lower memory consumption during
data reconstruction. We investigate how density gradients can be computed using
the network and show an extension where density, gradient and curvature are
predicted jointly. As an alternative to spatial super-resolution approaches for
time-varying fields, we propose a solution that builds upon latent-space
interpolation to enable random access reconstruction at arbitrary granularity.
We summarize our findings in the form of an assessment of the strengths and
limitations of scene representation networks for compression domain
volume rendering, and outline future research directions.
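The latent-space interpolation idea can be sketched as follows: one latent code per stored time step and a shared decoder, so that reconstruction at an arbitrary time only requires interpolating neighboring codes and decoding once. The tiny random decoder stands in for a trained scene representation network; all names are illustrative.

```python
import numpy as np

# Hedged sketch of latent-space interpolation for time-varying fields,
# assuming one latent vector per keyframe and a shared decoder.
rng = np.random.default_rng(1)
latent_dim, n_keyframes, n_voxels = 8, 4, 32
latents = rng.normal(size=(n_keyframes, latent_dim))   # one code per keyframe
W = rng.normal(size=(latent_dim, n_voxels)) * 0.1      # stand-in decoder weights

def decode(z):
    # linear decoder with nonlinearity: latent code -> field values
    return np.tanh(z @ W)

def reconstruct(t):
    """Random-access reconstruction at arbitrary time t in [0, n_keyframes-1]:
    linearly interpolate the neighboring latent codes, then decode once."""
    i = min(int(np.floor(t)), n_keyframes - 2)
    a = t - i
    z = (1 - a) * latents[i] + a * latents[i + 1]
    return decode(z)

field_mid = reconstruct(1.5)   # time between keyframes 1 and 2
```

Because interpolation happens in the low-dimensional latent space, the cost per queried time step is a single decoder evaluation, regardless of how finely time is sampled.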
A Streamline-guided De-Homogenization Approach for Structural Design
We present a novel de-homogenization approach for efficient design of
high-resolution load-bearing structures. The proposed approach builds upon a
streamline-based parametrization of the design domain, using a set of
space-filling and evenly-spaced streamlines in the two mutually orthogonal
direction fields that are obtained from homogenization-based topology
optimization. Streamlines in these fields are converted into a graph, which is
then used to construct a quad-dominant mesh whose edges follow the direction
fields. In addition, the edge width is adjusted according to the density and
anisotropy of the optimized orthotropic cells. In a number of numerical
examples, we demonstrate the mechanical performance and regular appearance of
the resulting structural designs, and compare them with those from classic and
contemporary approaches.
Comment: 10 pages, 13 figures, submitted to a journal, under review.
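The basic building block of a streamline-based parametrization is numerical integration of a curve through a direction field. A minimal sketch, using a simple analytic rotation field in place of the direction fields obtained from homogenization-based topology optimization:

```python
import numpy as np

# Hedged sketch: tracing one streamline through a 2D direction field
# with midpoint (RK2) integration. The concentric-circle field here is
# illustrative; in the actual method the directions come from the
# homogenization result.

def direction(p):
    # unit direction field: concentric circles around the origin
    x, y = p
    v = np.array([-y, x])
    n = np.linalg.norm(v)
    return v / n if n > 1e-12 else np.array([1.0, 0.0])

def trace_streamline(seed, step=0.01, n_steps=200):
    """Midpoint-rule integration of the direction field from a seed point."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        mid = p + 0.5 * step * direction(p)
        pts.append(p + step * direction(mid))
    return np.array(pts)

line = trace_streamline([1.0, 0.0])
```

For the circular test field, a correctly traced streamline stays on its circle of constant radius, which is a convenient sanity check for the integrator before applying it to optimized direction fields.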
Volumetric Isosurface Rendering with Deep Learning-Based Super-Resolution
Rendering an accurate image of an isosurface in a volumetric field typically
requires large numbers of data samples. Reducing the number of required samples
lies at the core of research in volume rendering. With the advent of deep
learning networks, a number of architectures have been proposed recently to
infer missing samples in multi-dimensional fields, for applications such as
image super-resolution and scan completion. In this paper, we investigate the
use of such architectures for learning the upscaling of a low-resolution
sampling of an isosurface to a higher resolution, with high fidelity
reconstruction of spatial detail and shading. We introduce a fully
convolutional neural network, to learn a latent representation generating a
smooth, edge-aware normal field and ambient occlusions from a low-resolution
normal and depth field. By adding a frame-to-frame motion loss into the
learning stage, the upscaling can consider temporal variations and achieves
improved frame-to-frame coherence. We demonstrate the quality of the network
for isosurfaces which were never seen during training, and discuss remote and
in-situ visualization as well as focus+context visualization as potential
applications.
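The two primitive operations of such an upscaling pipeline can be sketched in isolation: a 2x spatial upsample of a low-resolution field followed by a 3x3 convolution. Here a fixed smoothing kernel stands in for the learned layers of the fully convolutional network; shapes and names are illustrative.

```python
import numpy as np

# Hedged sketch of the pipeline's building blocks: nearest-neighbor 2x
# upsampling plus one 3x3 'same' convolution with zero padding. A
# trained network would stack many such convolutions with learned
# weights; the Gaussian-like kernel here is a stand-in.

def upsample2x(field):
    # (H, W) -> (2H, 2W) by pixel duplication
    return np.repeat(np.repeat(field, 2, axis=0), 2, axis=1)

def conv3x3(field, kernel):
    # 'same' convolution with zero padding, direct (non-FFT) form
    h, w = field.shape
    padded = np.pad(field, 1)
    out = np.zeros_like(field)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

smooth = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
low_res = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # toy depth field
high_res = conv3x3(upsample2x(low_res), smooth)
```

Since the smoothing kernel sums to one, constant regions pass through unchanged, which is the behavior one expects from each layer's identity component before training shapes the residual detail.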
Neural Fields for Interactive Visualization of Statistical Dependencies in 3D Simulation Ensembles
We present the first neural network that has learned to compactly represent
and can efficiently reconstruct the statistical dependencies between the values
of physical variables at different spatial locations in large 3D simulation
ensembles. Going beyond linear dependencies, we consider mutual information as
a measure of non-linear dependence. We demonstrate learning and reconstruction
with a large weather forecast ensemble comprising 1000 members, each storing
multiple physical variables at a 250 x 352 x 20 simulation grid. By
circumventing compute-intensive statistical estimators at runtime, we
demonstrate significantly reduced memory and computation requirements for
reconstructing the major dependence structures. This enables embedding the
estimator into a GPU-accelerated direct volume renderer and interactively
visualizing all mutual dependencies for a selected domain point.
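The compute-intensive runtime estimator that the neural field circumvents can be sketched with a classical joint-histogram estimate of mutual information between the ensemble values at two domain points. Bin count and the toy ensemble below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: histogram-based mutual information between the
# ensemble values x, y of two variables/locations. I(X;Y) =
# sum p(x,y) log(p(x,y) / (p(x) p(y))), estimated from a 2D histogram.

def mutual_information(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x, (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y, (1, bins)
    nz = pxy > 0                            # skip empty cells (0 log 0 = 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
x = rng.normal(size=5000)                   # "ensemble members" at point A
y_dep = x + 0.1 * rng.normal(size=5000)     # strongly dependent point B
y_ind = rng.normal(size=5000)               # independent point C
mi_dep = mutual_information(x, y_dep)
mi_ind = mutual_information(x, y_ind)
```

Evaluating this for every pair of domain points at interactive rates is what becomes infeasible at scale, motivating a learned representation that answers such queries directly.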
Postprocessing of Ensemble Weather Forecasts Using Permutation-invariant Neural Networks
Statistical postprocessing is used to translate ensembles of raw numerical
weather forecasts into reliable probabilistic forecast distributions. In this
study, we examine the use of permutation-invariant neural networks for this
task. In contrast to previous approaches, which often operate on ensemble
summary statistics and dismiss details of the ensemble distribution, we propose
networks which treat forecast ensembles as a set of unordered member forecasts
and learn link functions that are by design invariant to permutations of the
member ordering. We evaluate the quality of the obtained forecast distributions
in terms of calibration and sharpness, and compare the models against classical
and neural network-based benchmark methods. In case studies addressing the
postprocessing of surface temperature and wind gust forecasts, we demonstrate
state-of-the-art prediction quality. To deepen the understanding of the learned
inference process, we further propose a permutation-based importance analysis
for ensemble-valued predictors, which highlights specific aspects of the
ensemble forecast that are considered important by the trained postprocessing
models. Our results suggest that most of the relevant information is contained
in few ensemble-internal degrees of freedom, which may impact the design of
future ensemble forecasting and postprocessing systems.
Comment: Submitted to Artificial Intelligence for the Earth Systems.
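The permutation-invariance property has a compact structural recipe (in the style of deep-sets architectures): apply the same embedding to every member, then pool with a symmetric function such as the mean, so the ordering of members cannot affect the output. Random weights stand in for trained parameters; all names are illustrative.

```python
import numpy as np

# Hedged sketch of a permutation-invariant postprocessing network:
# shared per-member embedding + symmetric (mean) pooling + link layer.
rng = np.random.default_rng(3)
n_members, n_features, hidden = 20, 5, 16
W1 = rng.normal(size=(n_features, hidden))
W2 = rng.normal(size=(hidden, 2))   # e.g. location/scale of a forecast distribution

def postprocess(ensemble):
    """ensemble: (n_members, n_features) unordered member forecasts."""
    h = np.tanh(ensemble @ W1)      # identical embedding for every member
    pooled = h.mean(axis=0)         # symmetric pooling: order cannot matter
    return pooled @ W2              # link function output

ens = rng.normal(size=(n_members, n_features))
out = postprocess(ens)
shuffled = ens[rng.permutation(n_members)]
```

Shuffling the member rows leaves the output unchanged by construction, which is exactly the invariance the text says is built into the link functions by design.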